Reassessing The Fundamentals: New Constraints on the Evolution, Ages and Masses of Neutron Stars
The ages and masses of neutron stars (NSs) are two fundamental threads that
make pulsars accessible to other sub-disciplines of astronomy and physics. A
realistic and accurate determination of these two derived parameters plays an
important role in understanding the advanced stages of stellar evolution and the
physics that governs the relevant processes. Here I summarize new constraints on the
ages and masses of NSs from an evolutionary perspective. I show that the
observed P-Pdot demographics are more diverse than what is theoretically
predicted for the standard evolutionary channel. In particular, standard
recycling followed by dipole spin-down fails to reproduce the population of
millisecond pulsars with higher magnetic fields (B > 4 x 10^{8} G) at rates
deduced from observations. A proper inclusion of constraints arising from
binary evolution and mass accretion offers a more realistic insight into the
age distribution. By analytically implementing these constraints, I propose a
"modified" spin-down age for millisecond pulsars that gives estimates closer to
the true age. Finally, I independently analyze the peak, skewness and cutoff
values of the underlying mass distribution from a comprehensive list of radio
pulsars for which secure mass measurements are available. The inferred mass
distribution shows clear peaks at 1.35 Msun and 1.50 Msun for NSs in double
neutron star (DNS) and neutron star-white dwarf (NS-WD) systems respectively. I
find a mass cutoff at 2 Msun for NSs with WD companions, which establishes a
firm lower bound for the maximum mass of NSs.
Comment: 4 pages, 4 figures; to appear in the AIP proceedings of "Astrophysics of Neutron Stars-2010", eds. E. Gogus, T. Belloni, U. Erta
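The "modified" spin-down age above refines the conventional characteristic age. As a point of reference, that conventional baseline is tau_c = P / (2 Pdot); the sketch below computes it for illustrative (not paper-supplied) values of P and Pdot.

```python
# Characteristic (dipole) spin-down age of a pulsar: tau_c = P / (2 * Pdot).
# This is the conventional estimate that the "modified" age above refines;
# the example values are illustrative, not taken from the paper.

SECONDS_PER_YEAR = 3.156e7

def characteristic_age_yr(p_s: float, pdot: float) -> float:
    """Spin-down age in years for period p_s (seconds) and period derivative pdot (s/s)."""
    return p_s / (2.0 * pdot) / SECONDS_PER_YEAR

# Millisecond-pulsar-like values: P ~ 1.558 ms, Pdot ~ 1.05e-19 s/s
age = characteristic_age_yr(1.558e-3, 1.05e-19)
print(f"{age:.2e} yr")  # on the order of a few hundred Myr
```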
Do all millisecond pulsars share a common heritage?
The discovery of millisecond pulsations from neutron stars in low mass X-ray
binary (LMXB) systems has substantiated the theoretical prediction that links
millisecond radio pulsars (MSRPs) and LMXBs. Since then, the process that
produces millisecond radio pulsars from LMXBs, followed by spin-down due to
dipole radiation has been conceived as the 'standard evolution' of millisecond
pulsars. However, the question of whether all the observed millisecond radio
pulsars could be produced by LMXBs has not been quantitatively addressed until
now.
The standard evolutionary process produces millisecond pulsars with periods
(P) and spin-down rates (Pdot) that are not entirely independent. The possible
P-Pdot values that millisecond radio pulsars can attain are jointly
constrained. In order to test whether the observed millisecond radio pulsars
are the unequivocal descendants of millisecond X-ray pulsars (MSXP), we have
produced a probability map that represents the expected distribution of
millisecond radio pulsars for the standard model. We show with more than 95%
confidence that the fastest spinning millisecond radio pulsars with high
magnetic fields, e.g. PSR B1937+21, cannot be produced by the observed
millisecond X-ray pulsars within the framework of the standard model.
Comment: Full resolution color figures available at: http://www.kiziltan.org/research.html. To appear in the American Institute of Physics (AIP) proceedings, 8 pages, 2 figures, 1 table
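The "high magnetic fields" attributed to pulsars like PSR B1937+21 are inferred from P and Pdot under the same dipole spin-down model. A minimal sketch of that standard inference (the coefficient 3.2e19 G is the conventional dipole value; this is not the paper's probability-map computation):

```python
# Surface dipole field inferred from P and Pdot under the standard
# magnetic-dipole spin-down model: B ~ 3.2e19 * sqrt(P * Pdot) gauss.
# Illustrative sketch only; values approximate PSR B1937+21.
import math

def dipole_field_gauss(p_s: float, pdot: float) -> float:
    """Inferred surface dipole field (gauss) from period (s) and period derivative (s/s)."""
    return 3.2e19 * math.sqrt(p_s * pdot)

b = dipole_field_gauss(1.558e-3, 1.05e-19)
print(f"B ~ {b:.1e} G")  # roughly 4e8 G
```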
Topology-Aware Focal Loss for 3D Image Segmentation
The efficacy of segmentation algorithms is frequently compromised by
topological errors like overlapping regions, disrupted connections, and voids.
To tackle this problem, we introduce a novel loss function, namely
Topology-Aware Focal Loss (TAFL), which combines the conventional Focal Loss
with a topological constraint term based on the Wasserstein distance between
the ground truth and predicted segmentation masks' persistence diagrams. By
enforcing the same topology as the ground truth, the topological constraint
can effectively resolve topological errors, while Focal Loss tackles class
imbalance. We begin by constructing persistence diagrams from filtered cubical
complexes of the ground truth and predicted segmentation masks. We subsequently
utilize the Sinkhorn-Knopp algorithm to determine the optimal transport plan
between the two persistence diagrams. The resultant transport plan minimizes
the cost of transporting mass from one distribution to the other and provides a
mapping between the points in the two persistence diagrams. We then compute the
Wasserstein distance based on this transport plan to measure the topological
dissimilarity between the ground truth and predicted masks. We evaluate our
approach by training a 3D U-Net with the MICCAI Brain Tumor Segmentation
(BraTS) challenge validation dataset, which requires accurate segmentation of
3D MRI scans that integrate various modalities for the precise identification
and tracking of malignant brain tumors. We then demonstrate that segmentation
quality is enhanced by regularizing the Focal Loss with the addition of a
topological constraint as a penalty term.
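The Sinkhorn-Knopp step described above can be sketched in a few lines of NumPy: entropy-regularized optimal transport between two toy point sets standing in for persistence diagrams. This is an illustrative stand-in, not the authors' implementation, and it omits the diagonal-matching convention used for true persistence-diagram distances.

```python
# Entropic optimal transport (Sinkhorn-Knopp) between two toy "persistence
# diagrams" of (birth, death) points. NumPy-only sketch of the TAFL
# topological term; no diagonal matching, uniform weights.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropy-regularized OT plan between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # alternating marginal scaling
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Toy diagrams: rows are (birth, death) points
D1 = np.array([[0.0, 1.0], [0.2, 0.8]])
D2 = np.array([[0.1, 0.9], [0.3, 0.7]])
C = np.linalg.norm(D1[:, None, :] - D2[None, :, :], axis=-1)  # pairwise costs
a = np.full(len(D1), 1.0 / len(D1))
b = np.full(len(D2), 1.0 / len(D2))
P = sinkhorn(a, b, C)
wasserstein_approx = float((P * C).sum())  # entropic approximation of W1
print(wasserstein_approx)
```

The resulting plan `P` has marginals `a` and `b`, and the inner product of the plan with the cost matrix approximates the Wasserstein distance used as the penalty.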
EEG-NeXt: A Modernized ConvNet for The Classification of Cognitive Activity from EEG
One of the main challenges in electroencephalogram (EEG) based brain-computer
interface (BCI) systems is learning the subject/session invariant features to
classify cognitive activities within an end-to-end discriminative setting. We
propose a novel end-to-end machine learning pipeline, EEG-NeXt, which
facilitates transfer learning by: i) aligning the EEG trials from different
subjects in Euclidean space, ii) tailoring the techniques of deep learning
for the scalograms of EEG signals to capture better frequency localization for
low-frequency, longer-duration events, and iii) utilizing pretrained ConvNeXt
(a modernized ResNet architecture which supersedes state-of-the-art (SOTA)
image classification models) as the backbone network via adaptive finetuning.
On publicly available datasets (Physionet Sleep Cassette and BNCI2014001) we
benchmark our method against SOTA via cross-subject validation and demonstrate
improved accuracy in cognitive activity classification along with better
generalizability across cohorts.
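Step (i), aligning EEG trials from different subjects in Euclidean space, is commonly done by whitening each trial with the inverse square root of the subject's mean spatial covariance. A minimal NumPy sketch of that alignment idea (the pipeline's actual implementation may differ):

```python
# Euclidean-space alignment of EEG trials: whiten every trial by the inverse
# square root of the subject's mean spatial covariance, so that after
# alignment the mean covariance is the identity. Illustrative sketch only.
import numpy as np

def euclidean_align(trials):
    """trials: array of shape (n_trials, n_channels, n_samples)."""
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    R = covs.mean(axis=0)                          # reference covariance
    w, V = np.linalg.eigh(R)                       # symmetric eigendecomposition
    R_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return np.stack([R_inv_sqrt @ x for x in trials])

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4, 128))                  # 10 trials, 4 channels
Xa = euclidean_align(X)
mean_cov = np.mean([x @ x.T / x.shape[1] for x in Xa], axis=0)
print(np.allclose(mean_cov, np.eye(4)))            # aligned: ~identity covariance
```

Because the whitening matrix is symmetric, the aligned mean covariance is exactly R^{-1/2} R R^{-1/2} = I, which puts trials from different subjects on a common footing before the network sees them.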
ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery
In computer-aided drug discovery (CADD), virtual screening (VS) is used for
identifying the drug candidates that are most likely to bind to a molecular
target in a large library of compounds. Most VS methods to date have focused on
using canonical compound representations (e.g., SMILES strings, Morgan
fingerprints) or generating alternative fingerprints of the compounds by
training progressively more complex variational autoencoders (VAEs) and graph
neural networks (GNNs). Although VAEs and GNNs led to significant improvements
in VS performance, these methods scale poorly to large virtual compound
datasets, and their performance has shown only incremental improvements in the
past few years. To address this problem,
we developed a novel method using multiparameter persistence (MP) homology that
produces topological fingerprints of the compounds as multidimensional vectors.
Our primary contribution is framing the VS process as a new topology-based
graph ranking problem by partitioning a compound into chemical substructures
informed by the periodic properties of its atoms and extracting their
persistent homology features at multiple resolution levels. We show that the
margin loss fine-tuning of pretrained Triplet networks attains highly
competitive results in differentiating between compounds in the embedding space
and ranking their likelihood of becoming effective drug candidates. We further
establish theoretical guarantees for the stability properties of our proposed
MP signatures, and demonstrate that our models, enhanced by the MP signatures,
outperform state-of-the-art methods on benchmark datasets by a wide and highly
statistically significant margin (e.g., 93% gain for Cleves-Jain and 54% gain
for the DUD-E Diverse dataset).
Comment: NeurIPS 2022 (36th Conference on Neural Information Processing Systems)
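The margin loss used to fine-tune the Triplet networks above is a standard hinge on embedding distances: pull an anchor toward a similar compound and push it away from a dissimilar one. A NumPy sketch with made-up 2-D embeddings (not the ToDD codebase):

```python
# Triplet margin loss on compound embeddings: zero when the negative is
# already farther than the positive by at least `margin`. Toy vectors only.
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss max(0, d(a,p) - d(a,n) + margin) on Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor compound embedding
p = np.array([0.1, 0.0])   # similar compound: small distance
n = np.array([2.0, 0.0])   # dissimilar compound: large distance
print(triplet_margin_loss(a, p, n))  # 0.0: margin already satisfied
```

Minimizing this loss over many such triplets separates active from inactive compounds in the embedding space, which is what makes distance-based ranking of candidates possible.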